Pegasus Tutorial

Compiled on: 2020-08-11

In [1]:
import pegasus as pg

Count Matrix File

For this tutorial, we provide a count matrix dataset on human bone marrow from 8 donors, stored in Zarr format (file extension ".zarr.zip").

You can download the data at https://storage.googleapis.com/terra-featured-workspaces/Cumulus/MantonBM_nonmix_subset.zarr.zip.

This file was generated by aggregating the gene-count matrices of the 8 10X channels using PegasusIO, then filtering out cells with fewer than $100$ genes expressed. Please see here for how to do it interactively.

Now load the file using the Pegasus read_input function:

In [2]:
data = pg.read_input("MantonBM_nonmix_subset.zarr.zip")
data
2020-08-11 22:08:36,142 - pegasusio.readwrite - INFO - zarr file 'MantonBM_nonmix_subset.zarr.zip' is loaded.
2020-08-11 22:08:36,142 - pegasusio.readwrite - INFO - Function 'read_input' finished in 0.23s.
Out[2]:
MultimodalData object with 1 UnimodalData: 'GRCh38-rna'
    It currently binds to UnimodalData object GRCh38-rna

UnimodalData object with n_obs x n_vars = 48219 x 36601
    Genome: GRCh38; Modality: rna
    It contains 1 matrices: 'X'
    It currently binds to matrix 'X' as X

    obs: 'n_genes', 'Channel'
    var: 'featureid'
    obsm: 
    varm: 
    uns: 'genome', 'modality'

The count matrix is managed as a UnimodalData object defined in the PegasusIO module. Users manipulate the data from the top level via the MultimodalData structure, which can contain multiple UnimodalData objects as members.

For this example, as shown above, data is a MultimodalData object with only one UnimodalData member, keyed "GRCh38-rna", which is its default UnimodalData. Any operation on data is applied to this default UnimodalData object.

UnimodalData has a structure similar to AnnData defined in the anndata package (see anndata.AnnData for details):

It has 5 major parts:

  • Raw count matrix: data.X, a SciPy sparse matrix whose rows are cell barcodes and whose columns are genes/features:
In [3]:
data.X
Out[3]:
<48219x36601 sparse matrix of type '<class 'numpy.int32'>'
	with 39997174 stored elements in Compressed Sparse Row format>

This dataset contains $48,219$ barcodes and $36,601$ genes.
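The Compressed Sparse Row layout above can be illustrated on a toy matrix with SciPy (a sketch with made-up counts, unrelated to the tutorial's data):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy raw counts: 3 cells (rows) x 4 genes (columns), mostly zeros.
dense = np.array([
    [0, 2, 0, 1],
    [0, 0, 0, 0],
    [3, 0, 0, 5],
], dtype=np.int32)
X = csr_matrix(dense)

print(X.shape)  # (3, 4)
print(X.nnz)    # 4 stored elements -- zeros are not stored
```

Because most genes are not expressed in most cells, the sparse format stores only the nonzero entries, which is why a 48,219 × 36,601 matrix fits comfortably in memory.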

  • Cell barcode attributes: data.obs, a Pandas data frame indexed by barcode. For now, it has two attributes: "n_genes", the number of expressed genes per cell, and "Channel", the donor from which the cell comes:
In [4]:
data.obs.head()
Out[4]:
n_genes Channel
barcodekey
MantonBM1_HiSeq_1-AAACCTGAGCAGGTCA 816 MantonBM1_HiSeq_1
MantonBM1_HiSeq_1-AAACCTGCACACTGCG 716 MantonBM1_HiSeq_1
MantonBM1_HiSeq_1-AAACCTGCACCGGAAA 554 MantonBM1_HiSeq_1
MantonBM1_HiSeq_1-AAACCTGCATAGACTC 967 MantonBM1_HiSeq_1
MantonBM1_HiSeq_1-AAACCTGCATCGATGT 1704 MantonBM1_HiSeq_1
In [5]:
data.obs['Channel'].value_counts()
Out[5]:
MantonBM6_HiSeq_1    6748
MantonBM8_HiSeq_1    6092
MantonBM4_HiSeq_1    6068
MantonBM7_HiSeq_1    6025
MantonBM5_HiSeq_1    5963
MantonBM2_HiSeq_1    5930
MantonBM1_HiSeq_1    5837
MantonBM3_HiSeq_1    5556
Name: Channel, dtype: int64
  • Gene attributes: data.var, also a Pandas data frame, indexed by gene name. For now, it only has one attribute, "featureid", referring to the unique gene ID in the experiment:
In [6]:
data.var.head()
Out[6]:
featureid
featurekey
MIR1302-2HG ENSG00000243485
FAM138A ENSG00000237613
OR4F5 ENSG00000186092
AL627309.1 ENSG00000238009
AL627309.3 ENSG00000239945
  • Unstructured information: data.uns, a Python dictionary. It usually stores information about the whole dataset rather than individual barcodes or features, such as its genome reference and modality type:
In [7]:
data.uns['genome']
Out[7]:
'GRCh38'
In [8]:
data.uns['modality']
Out[8]:
'rna'
  • Finally, embedding attributes: data.obsm on cell barcodes, and data.varm on genes. We'll see them in later sections.

Preprocessing

Filtration

The first step in preprocessing is quality control (QC): removing cells and genes of low quality.

We can generate QC metrics using the following method with default settings:

In [9]:
pg.qc_metrics(data, percent_mito=10)

The metrics considered are:

  • Number of genes: keep cells with $500 \leq \text{# Genes} < 6000$ (Default);
  • Number of UMIs: don't filter cells due to UMI bounds (Default);
  • Percent of Mitochondrial genes: keep cells with percent $< 10\%$.

For details on customizing your own thresholds, see documentation.
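The defaults above amount to a boolean mask over per-cell metrics. A minimal NumPy sketch (with made-up metric values; the real qc_metrics also records per-channel statistics in data.obs):

```python
import numpy as np

# Hypothetical per-cell QC metrics for 4 cells.
n_genes = np.array([450, 800, 7000, 1200])
percent_mito = np.array([2.0, 12.5, 3.0, 4.0])

# Default gene bounds plus the percent_mito=10 threshold used above.
passed = (n_genes >= 500) & (n_genes < 6000) & (percent_mito < 10)
print(passed)  # only the last cell passes all three checks
```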

Numeric summaries of the filtration on cell barcodes and genes can be obtained with the get_filter_stats method:

In [10]:
df_qc = pg.get_filter_stats(data)
df_qc
Out[10]:
kept median_n_genes median_n_umis median_percent_mito filt total median_n_genes_before median_n_umis_before median_percent_mito_before
Channel
MantonBM5_HiSeq_1 4090 770 2795.0 3.136190 1873 5963 650.0 2139.0 3.399615
MantonBM4_HiSeq_1 4172 790 2278.5 3.271181 1896 6068 672.0 1764.0 3.519009
MantonBM3_HiSeq_1 4225 779 2621.0 3.274451 1331 5556 715.0 2229.0 3.449398
MantonBM1_HiSeq_1 4415 790 2533.0 3.713331 1422 5837 723.0 2149.0 3.855422
MantonBM7_HiSeq_1 4452 745 2403.5 3.053718 1573 6025 679.0 2053.0 3.177570
MantonBM8_HiSeq_1 4511 735 2561.0 3.520510 1581 6092 671.5 2212.0 3.706849
MantonBM6_HiSeq_1 4665 852 2700.0 3.032258 2083 6748 741.0 2129.0 3.345829
MantonBM2_HiSeq_1 4935 801 2486.0 3.514056 995 5930 756.0 2261.5 3.534756

The result is a Pandas data frame indexed by sample.

You can also inspect the QC stats via plots. Below is the plot for number of genes:

In [11]:
pg.qcviolin(data, plot_type='gene', dpi=100)

Then on number of UMIs:

In [12]:
pg.qcviolin(data, plot_type='count', dpi=100)

And on percentage of mitochondrial gene expression:

In [13]:
pg.qcviolin(data, plot_type='mito', dpi=100)

Now filter cells based on QC metrics set in qc_metrics:

In [14]:
pg.filter_data(data)
2020-08-11 22:08:41,380 - pegasusio.qc_utils - INFO - After filtration, 35465 out of 48219 cell barcodes are kept in UnimodalData object GRCh38-rna.

You can see that $35,465$ cells ($73.55\%$) are kept.

For genes, only those expressed in no cells are removed at this step. We then identify robust genes for downstream analysis:

In [15]:
pg.identify_robust_genes(data)
2020-08-11 22:08:41,843 - pegasus.tools.preprocessing - INFO - After filtration, 25653/36601 genes are kept. Among 25653 genes, 17516 genes are robust.

The criterion is:

  • A gene is robust if it is expressed in at least $0.05\%$ of cells; i.e., among every 6,000 cells, at least 3 express it.

Please see its documentation for details.

As a result, $25,653$ ($70.09\%$) genes are kept. Among them, $17,516$ are robust.
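The robustness criterion can be sketched as a simple threshold on percent_cells (hypothetical expressing-cell counts below):

```python
import numpy as np

n_cells_total = 6000
# Hypothetical number of cells expressing each of 4 genes.
n_cells_expr = np.array([0, 2, 3, 150])
percent_cells = n_cells_expr / n_cells_total * 100

robust = percent_cells >= 0.05  # 0.05% of 6,000 cells = 3 cells
print(robust)  # genes expressed in 3+ of 6,000 cells pass
```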

We can now view the number of cells per sample after filtration:

In [16]:
data.obs['Channel'].value_counts()
Out[16]:
MantonBM2_HiSeq_1    4935
MantonBM6_HiSeq_1    4665
MantonBM8_HiSeq_1    4511
MantonBM7_HiSeq_1    4452
MantonBM1_HiSeq_1    4415
MantonBM3_HiSeq_1    4225
MantonBM4_HiSeq_1    4172
MantonBM5_HiSeq_1    4090
Name: Channel, dtype: int64

Normalization and Logarithmic Transformation

After filtration, we first normalize each cell so that its counts sum to the same total (default is $10^5$, see documentation), and then transform into logarithmic space via $\log(x + 1)$ to avoid number explosion:

In [17]:
pg.log_norm(data)
2020-08-11 22:08:42,670 - pegasus.tools.preprocessing - INFO - Function 'log_norm' finished in 0.81s.
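What log_norm does can be sketched on a small dense matrix (a simplified NumPy version; the real implementation operates on the sparse matrix):

```python
import numpy as np

# Toy counts: 2 cells x 3 genes.
counts = np.array([[1., 3., 0.],
                   [0., 5., 5.]])

# Scale each cell (row) to a total of 1e5 counts, then apply log(x + 1).
scaled = counts / counts.sum(axis=1, keepdims=True) * 1e5
lognorm = np.log1p(scaled)

# Undoing the log recovers rows that each sum to 1e5.
print(np.expm1(lognorm).sum(axis=1))
```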

For the downstream analysis, we make a copy of the count matrix, in case we want to come back to this step and redo the analysis:

In [18]:
data_trial = data.copy()

Highly Variable Gene Selection

Highly Variable Genes (HVG) are more likely to convey information discriminating different cell types and states. Thus, rather than considering all genes, people usually focus on selected HVGs for downstream analyses.

The consider_batch flag controls whether batch effects are taken into account during selection. At this step, set it to False:

In [19]:
pg.highly_variable_features(data_trial, consider_batch=False)
2020-08-11 22:08:43,384 - pegasus.tools.hvf_selection - INFO - 2000 highly variable features have been selected.
2020-08-11 22:08:43,384 - pegasus.tools.hvf_selection - INFO - Function 'highly_variable_features' finished in 0.48s.

By default, we select 2000 HVGs using the Pegasus selection method. Alternatively, you can choose the traditional method that both Seurat and SCANPY use by setting flavor='Seurat'. See documentation for details.

We can view the HVGs, ranked from the top:

In [20]:
data_trial.var.loc[data_trial.var['highly_variable_features']].sort_values(by='hvf_rank')
Out[20]:
featureid n_cells percent_cells robust highly_variable_features mean var hvf_loess hvf_rank
featurekey
LYZ ENSG00000090382 8566 24.153391 True True 1.526394 8.110593 3.775877 3
S100A9 ENSG00000163220 8182 23.070633 True True 1.423051 7.649136 3.657401 5
S100A8 ENSG00000143546 7674 21.638235 True True 1.328663 7.228290 3.463023 7
HLA-DRA ENSG00000204287 14836 41.832793 True True 2.242152 7.513027 4.208065 12
GNLY ENSG00000115523 5196 14.651064 True True 0.882394 4.859684 2.504140 13
... ... ... ... ... ... ... ... ... ...
AL355916.1 ENSG00000232774 99 0.279148 True True 0.010097 0.037130 0.032563 5557
MEI1 ENSG00000167077 1813 5.112082 True True 0.178823 0.611417 0.574647 5559
AL035701.1 ENSG00000231769 222 0.625969 True True 0.021673 0.078620 0.070820 5563
KIAA1324 ENSG00000116299 155 0.437051 True True 0.015310 0.054929 0.048948 5564
AC005332.2 ENSG00000267731 62 0.174820 True True 0.006563 0.025189 0.021578 5567

2000 rows × 9 columns

We can also view HVGs in a scatterplot:

In [21]:
pg.hvfplot(data_trial, dpi=200)

In this plot, each point stands for one gene. Blue points are selected to be HVGs, which account for the majority of variation of the dataset.

Principal Component Analysis

To reduce the dimension of the data, Principal Component Analysis (PCA) is widely used. Briefly, PCA transforms the data from its original dimensions into a much smaller set of Principal Components (PCs). The dimension is reduced, while the PCs still capture the majority of the variation in the data. Moreover, the new dimensions (i.e., PCs) are uncorrelated with each other.

pegasus uses the following method to perform PCA:

In [22]:
pg.pca(data_trial, robust=True)
2020-08-11 22:08:50,730 - pegasus.tools.preprocessing - INFO - Function 'pca' finished in 5.81s.

By default, pca uses:

  • Before PCA, scale the data to the standard normal distribution $N(0, 1)$, truncating values at a maximum of $10$;
  • Number of PCs to compute: 50;
  • Apply PCA only to highly variable features.

In addition, robust=True is set here for reproducibility. In practice, the default setting uses randomized PCA for faster computation.

See its documentation for customization.

To explain the meaning of PCs, let's look at the first PC (denoted $PC_1$), which explains the most variation:

In [23]:
coord_pc1 = data_trial.uns['PCs'][:, 0]
coord_pc1
Out[23]:
array([ 0.02224391,  0.01771348, -0.00588402, ..., -0.00049834,
        0.04844721,  0.03548852], dtype=float32)

This is an array of 2000 elements, each of which is a coefficient corresponding to one HVG.

The HVGs are the following:

In [24]:
data_trial.var.loc[data_trial.var['highly_variable_features']].index.values
Out[24]:
array(['HES4', 'ISG15', 'TNFRSF18', ..., 'RPS4Y2', 'MT-CO1', 'MT-CO3'],
      dtype=object)

$PC_1$ is computed by

\begin{equation*} PC_1 = \text{coord_pc1}[0] \cdot \text{HES4} + \text{coord_pc1}[1] \cdot \text{ISG15} + \text{coord_pc1}[2] \cdot \text{TNFRSF18} + \cdots + \text{coord_pc1}[1997] \cdot \text{RPS4Y2} + \text{coord_pc1}[1998] \cdot \text{MT-CO1} + \text{coord_pc1}[1999] \cdot \text{MT-CO3} \end{equation*}

Therefore, all 50 PCs are linear combinations of the 2000 HVGs.
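This relation can be verified on toy data: the PC scores are the centered, scaled feature matrix multiplied by the loading vectors. A minimal NumPy/SVD sketch (random data, not the tutorial's; Pegasus's actual solver differs):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))   # 6 cells x 4 "HVGs", already scaled
X = X - X.mean(axis=0)        # center before PCA

# PCA via SVD: the columns of Vt.T are the loading vectors (the 'PCs').
U, S, Vt = np.linalg.svd(X, full_matrices=False)
PCs = Vt.T                    # shape (4 genes, 4 PCs), like uns['PCs']
scores = X @ PCs              # cell embedding, like obsm['X_pca']

# PC_1 of cell 0 equals the sum of coefficient * gene value,
# exactly as in the formula for PC_1 above.
pc1_cell0 = sum(PCs[g, 0] * X[0, g] for g in range(4))
print(np.isclose(scores[0, 0], pc1_cell0))  # True
```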

The calculated PCA matrix is stored in the obsm field; it is the first embedding object we encounter:

In [25]:
data_trial.obsm['X_pca'].shape
Out[25]:
(35465, 50)

Each of the $35,465$ cells is now represented by 50 PC coordinates instead of 2000 HVG expression values.

Nearest Neighbors

Most of the downstream analysis, including clustering and visualization, requires a k-Nearest-Neighbor (kNN) graph on cells. We can build such a graph using the neighbors method:

In [26]:
pg.neighbors(data_trial)
2020-08-11 22:08:55,351 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 4.59s.
2020-08-11 22:08:56,364 - pegasus.tools.nearest_neighbors - INFO - Function 'calculate_affinity_matrix' finished in 1.01s.

It uses the default setting:

  • For each cell, calculate its 100 nearest neighbors (since a cell is its own nearest neighbor, 99 other cells are stored);
  • Use PCA matrix for calculation;
  • Use L2 distance as the metric;
  • Use the hnswlib search algorithm to compute approximate nearest neighbors efficiently.

See its documentation for customization.
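A brute-force version of this kNN search can be sketched in NumPy (hnswlib approximates the same result far faster; toy embedding below):

```python
import numpy as np

rng = np.random.default_rng(1)
emb = rng.normal(size=(8, 3))   # 8 cells in a 3-dim "PCA" space
k = 2

# Pairwise L2 distances, then take the k nearest excluding the cell itself.
diff = emb[:, None, :] - emb[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
order = np.argsort(dist, axis=1)
knn_indices = order[:, 1:k + 1]   # drop column 0 (the cell itself)
knn_distances = np.take_along_axis(dist, knn_indices, axis=1)

print(knn_indices.shape)  # (8, 2) -- analogous to uns['pca_knn_indices']
```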

Below is the result:

In [27]:
print(f"Get {data_trial.uns['pca_knn_indices'].shape[1]} nearest neighbors (excluding itself) for each cell.")
data_trial.uns['pca_knn_indices']
Get 99 nearest neighbors (excluding itself) for each cell.
Out[27]:
array([[30526,  2829, 30274, ..., 33708, 27448, 30245],
       [ 2651,  1870, 29723, ..., 28495, 28906,   430],
       [20161, 28285, 30032, ..., 15517, 35175, 31686],
       ...,
       [31460, 30824,  1165, ...,  6652, 19964, 34164],
       [34709, 34837, 31492, ..., 17719, 31436, 34653],
       [31722, 35401, 11884, ..., 32301, 30608, 17891]])
In [28]:
data_trial.uns['pca_knn_distances']
Out[28]:
array([[ 4.5018096,  5.199733 ,  5.4162574, ...,  6.318805 ,  6.323923 ,
         6.324403 ],
       [ 8.134447 ,  8.741686 ,  8.861305 , ..., 10.394341 , 10.403357 ,
        10.403537 ],
       [ 4.682442 ,  4.722567 ,  4.786066 , ...,  5.741749 ,  5.741947 ,
         5.7449093],
       ...,
       [ 7.5659328,  7.6666074,  7.874342 , ...,  9.455906 ,  9.457054 ,
         9.465961 ],
       [ 7.981716 ,  9.542673 ,  9.682436 , ..., 12.257049 , 12.257868 ,
        12.279212 ],
       [ 6.3852005,  7.311938 ,  7.3957562, ..., 11.366216 , 11.371264 ,
        11.405033 ]], dtype=float32)

Each row corresponds to one cell, listing its neighbors (not including itself) from nearest to farthest. data_trial.uns['pca_knn_indices'] stores their indices, and data_trial.uns['pca_knn_distances'] stores distances.

Clustering and Visualization

Now we are ready to cluster the data for cell type detection. Pegasus provides 4 clustering algorithms:

  • louvain: Louvain algorithm, using louvain package.
  • leiden: Leiden algorithm, using leidenalg package.
  • spectral_louvain: Spectral Louvain algorithm, which requires Diffusion Map.
  • spectral_leiden: Spectral Leiden algorithm, which requires Diffusion Map.

See this documentation for details.

In this tutorial, we use the Louvain algorithm:

In [29]:
pg.louvain(data_trial)
2020-08-11 22:08:57,661 - pegasus.tools.graph_operations - INFO - Function 'construct_graph' finished in 1.23s.
2020-08-11 22:09:15,553 - pegasus.tools.clustering - INFO - Louvain clustering is done. Get 19 clusters. Time spent = 19.13s.

As a result, the Louvain algorithm finds 19 clusters:

In [30]:
data_trial.obs['louvain_labels'].value_counts()
Out[30]:
1     4725
2     4472
3     4315
4     3791
5     2844
6     2711
7     2340
8     1783
9     1631
10    1277
11    1035
12     940
13     900
14     580
15     527
16     434
17     394
18     388
19     378
Name: louvain_labels, dtype: int64

We can check each cluster's composition regarding donors via a composition plot:

In [31]:
pg.compo_plot(data_trial, 'louvain_labels', 'Channel')

However, we can see a clear batch effect in the plot: e.g., Clusters 10 and 13 draw most of their cells from Donor 3.

We can see it more clearly in the FIt-SNE plot (a visualization algorithm we will discuss later):

In [32]:
pg.fitsne(data_trial)
2020-08-11 22:09:52,130 - pegasus.tools.visualization - INFO - Function 'fitsne' finished in 35.58s.
In [33]:
pg.scatter(data_trial, attrs=['louvain_labels', 'Channel'], basis='fitsne')

Batch Correction

Batch effects occur when data samples are generated under different conditions, such as date, weather, lab setting, or equipment. Unless all samples are known to have been generated under similar conditions, batch effects should be suspected whenever a visualization shows samples largely isolated from one another.

For this dataset, we need the batch correction step to reduce such a batch effect, which is observed in the plot above.

In this tutorial, we use the Harmony algorithm for batch correction. It requires redoing the HVG selection, calculating new PCA coordinates, and then applying the correction:

In [34]:
pg.highly_variable_features(data, consider_batch=True)
pg.pca(data, robust=True)
pca_key = pg.run_harmony(data)
2020-08-11 22:09:54,137 - pegasus.tools.hvf_selection - INFO - Estimation on feature statistics per channel is finished. Time spent = 0.69s.
2020-08-11 22:09:54,178 - pegasus.tools.hvf_selection - INFO - 2000 highly variable features have been selected.
2020-08-11 22:09:54,179 - pegasus.tools.hvf_selection - INFO - Function 'highly_variable_features' finished in 0.73s.
2020-08-11 22:09:59,844 - pegasus.tools.preprocessing - INFO - Function 'pca' finished in 5.66s.
2020-08-11 22:10:03,618 - pegasus.tools.batch_correction - INFO - Start integration using Harmony.
	Initialization is completed.
	Completed 1 / 10 iteration(s).
	Completed 2 / 10 iteration(s).
	Completed 3 / 10 iteration(s).
	Completed 4 / 10 iteration(s).
	Completed 5 / 10 iteration(s).
	Completed 6 / 10 iteration(s).
	Completed 7 / 10 iteration(s).
	Completed 8 / 10 iteration(s).
	Completed 9 / 10 iteration(s).
	Completed 10 / 10 iteration(s).
Reach convergence after 10 iteration(s).
2020-08-11 22:10:23,017 - pegasus.tools.batch_correction - INFO - Function 'run_harmony' finished in 23.17s.

The corrected PCA coordinates are stored in data.obsm:

In [35]:
data.obsm[f"X_{pca_key}"].shape
Out[35]:
(35465, 50)

Repeat Previous Steps on the Corrected Data

Since batch correction changes the PCA coordinates, we need to recalculate nearest neighbors and redo clustering. Don't forget to use the corrected PCA coordinates as the representation:

In [36]:
pg.neighbors(data, rep=pca_key)
pg.louvain(data, rep=pca_key)
2020-08-11 22:10:28,293 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 5.26s.
2020-08-11 22:10:29,258 - pegasus.tools.nearest_neighbors - INFO - Function 'calculate_affinity_matrix' finished in 0.97s.
2020-08-11 22:10:30,641 - pegasus.tools.graph_operations - INFO - Function 'construct_graph' finished in 1.38s.
2020-08-11 22:10:51,165 - pegasus.tools.clustering - INFO - Louvain clustering is done. Get 16 clusters. Time spent = 21.91s.

Let's check the composition plot now:

In [37]:
pg.compo_plot(data, 'louvain_labels', 'Channel')

If everything goes properly, you should see that no cluster is dominated by cells from a single donor. Also notice that the Louvain algorithm on the corrected data finds 16 clusters, instead of the original 19.

The FIt-SNE plot also differs:

In [38]:
pg.fitsne(data, rep=pca_key)
2020-08-11 22:11:26,237 - pegasus.tools.visualization - INFO - Function 'fitsne' finished in 34.17s.
In [39]:
pg.scatter(data, attrs=['louvain_labels', 'Channel'], basis='fitsne')

You can see that the right-hand-side plot shows a much better mixture of cells from different donors.

Visualization

tSNE Plot

In previous sections, we have seen data visualization using FIt-SNE, a fast implementation of the tSNE algorithm. Including FIt-SNE, Pegasus provides 3 different tSNE plotting methods; see the documentation for the full list.

UMAP Plot

Besides tSNE, Pegasus also provides UMAP plotting methods.

Below is the UMAP plot of the data using umap method:

In [40]:
pg.umap(data, rep=pca_key)
2020-08-11 22:11:27,513 - pegasus.tools.visualization - INFO - UMAP(min_dist=0.5, random_state=0, verbose=True)
2020-08-11 22:11:27,513 - pegasus.tools.visualization - INFO - Construct fuzzy simplicial set
2020-08-11 22:11:29,485 - pegasus.tools.visualization - INFO - Construct embedding
	completed  0  /  200 epochs
	completed  20  /  200 epochs
	completed  40  /  200 epochs
	completed  60  /  200 epochs
	completed  80  /  200 epochs
	completed  100  /  200 epochs
	completed  120  /  200 epochs
	completed  140  /  200 epochs
	completed  160  /  200 epochs
	completed  180  /  200 epochs
2020-08-11 22:11:49,684 - pegasus.tools.visualization - INFO - UMAP is calculated. Time spent = 22.19s.
2020-08-11 22:11:49,685 - pegasus.tools.visualization - INFO - Function 'umap' finished in 22.19s.
In [41]:
pg.scatter(data, attrs=['louvain_labels', 'Channel'], basis='umap')

Differential Expression Analysis

With the clusters ready, we can now perform Differential Expression (DE) analysis, which discovers cluster-specific marker genes. For each cluster, it compares the cells within the cluster against all other cells, then finds genes significantly highly expressed (up-regulated) or lowly expressed (down-regulated) in that cluster.
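For a single gene, the Welch's t-test step can be sketched with SciPy (hypothetical expression values; de_analysis vectorizes this across all genes and clusters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
in_cluster = rng.normal(loc=2.0, scale=1.0, size=50)   # gene expr in cluster
rest = rng.normal(loc=0.5, scale=1.0, size=200)        # expr in other cells

# equal_var=False gives Welch's t-test (no equal-variance assumption).
t_stat, p_val = stats.ttest_ind(in_cluster, rest, equal_var=False)
print(t_stat > 0 and p_val < 0.05)  # up-regulated in this cluster
```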

Now run DE analysis with the de_analysis method, using the Louvain clustering result:

In [42]:
pg.de_analysis(data, cluster='louvain_labels', auc=False)
2020-08-11 22:11:57,206 - pegasus.tools.diff_expr - INFO - Collecting basic statistics is done. Time spent = 6.24s.
2020-08-11 22:11:57,963 - pegasus.tools.diff_expr - INFO - Welch's t-test is done. Time spent = 0.76s.
2020-08-11 22:11:58,103 - pegasus.tools.diff_expr - INFO - Differential expression analysis is finished. Time spent = 7.17s.

By default, DE analysis runs the following (here auc=False turns off the AUC metrics):

  • auc: Area under ROC (AUROC) and Area under Precision-Recall (AUPR).
  • t: Welch’s t test.

Alternatively, you can also run the following tests by setting their corresponding parameters to True:

  • fisher: Fisher’s exact test.
  • mwu: Mann-Whitney U test.

The DE analysis result is stored under key "de_res" (by default) in the varm field of data. See documentation for more details.

To load the result in a human-readable format, use the markers method:

In [43]:
marker_dict = pg.markers(data)

By default, markers:

  • Sort genes by WAD scores in descending order;
  • Use $\alpha = 0.05$ significance level on q-values for inference.

See documentation for customizing these parameters.

Let's see the up-regulated genes for Cluster 1, ranked in descending order of log fold change:

In [44]:
marker_dict['1']['up'].sort_values(by='log_fold_change', ascending=False)
Out[44]:
mean_logExpr mean_logExpr_other log_fold_change fold_change percentage percentage_other percentage_fold_change WAD_score t_pval t_qval t_score
feature
LTB 4.236970 1.906439 2.330531 10.283401 89.454254 41.375065 2.162033 8.277385e-01 0.000000 0.000000 95.0
TRAC 3.188862 1.291090 1.897772 6.671012 73.274475 29.639132 2.472221 4.915841e-01 0.000000 0.000000 67.0
BCL11B 2.660677 0.791365 1.869312 6.483831 65.264847 20.212074 3.229003 3.731599e-01 0.000000 0.000000 69.0
CD3D 3.179749 1.399669 1.780080 5.930328 75.232742 32.307167 2.328670 4.713309e-01 0.000000 0.000000 66.0
CD3E 2.850476 1.256603 1.593874 4.922781 69.502411 30.343765 2.290501 3.785164e-01 0.000000 0.000000 59.0
... ... ... ... ... ... ... ... ... ... ... ...
TRBV5-1 0.004508 0.000855 0.003653 1.003660 0.128411 0.023944 5.362990 1.547655e-06 0.025044 0.045036 2.0
AC007342.4 0.004519 0.000887 0.003631 1.003638 0.128411 0.027364 4.692616 1.547295e-06 0.026043 0.046621 2.0
DCDC2B 0.003608 0.000196 0.003412 1.003417 0.096308 0.006841 14.077849 1.137906e-06 0.021238 0.038576 2.0
AC104982.2 0.003596 0.000293 0.003303 1.003308 0.096308 0.010262 9.385233 1.117754e-06 0.025679 0.046065 2.0
DIRC3-AS1 0.002889 0.000000 0.002889 1.002893 0.080257 0.000000 inf 8.108671e-07 0.025687 0.046078 2.0

1429 rows × 11 columns

Among them, TRAC is worth noting: it is a critical marker gene for T cells.

We can also use a volcano plot to inspect the DE result. Below is such a plot for Cluster 1, with log fold change as the metric:

In [45]:
pg.volcano(data, cluster_id = '1', dpi=200)

The plot above uses the default thresholds: log fold change at $1$ (i.e., fold change at $2$) and q-value at $0.05$. Each point stands for a gene. Red points are significant marker genes: those on the right-hand side are up-regulated in Cluster 1, while those on the left-hand side are down-regulated.

We can see that gene TRAC is the second-rightmost point: a significantly up-regulated gene for Cluster 1.
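The default thresholds translate to a simple filter over the DE statistics. A sketch on hypothetical rows (gene, log fold change, q-value):

```python
# Hypothetical DE rows: (gene, log_fold_change, q_value).
de_rows = [
    ("TRAC",  1.90, 1e-10),
    ("LTB",   2.33, 1e-12),
    ("GENE1", 0.40, 0.001),   # fold change too small
    ("GENE2", 1.50, 0.20),    # not significant
]

# Volcano-plot defaults: |log fold change| >= 1 and q-value < 0.05.
up = [g for g, lfc, q in de_rows if lfc >= 1.0 and q < 0.05]
down = [g for g, lfc, q in de_rows if lfc <= -1.0 and q < 0.05]
print(up)    # ['TRAC', 'LTB']
print(down)  # []
```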

To store a specific DE analysis result to a file, use the write_results_to_excel method in Pegasus:

In [46]:
pg.write_results_to_excel(marker_dict, "MantonBM_subset.de.xlsx")
2020-08-11 22:12:42,399 - pegasus.tools.diff_expr - INFO - Excel spreadsheet is written. Time spent = 19.97s.

Cell Type Annotation

With DE analysis done, we can use the test results to annotate the clusters.

In [47]:
celltype_dict = pg.infer_cell_types(data, markers = 'human_immune', de_test = 't')
cluster_names = pg.infer_cluster_names(celltype_dict)

infer_cell_types has 2 critical parameters to set:

  • markers: Either 'human_immune', 'mouse_immune', 'human_brain', 'mouse_brain', 'human_lung', or a user-specified marker dictionary.
  • de_test: Decide which DE analysis test to be used for cell type inference. It can be either 't', 'fisher', or 'mwu'.

infer_cluster_names by default uses threshold = 0.5 to filter out candidate cell types with scores lower than 0.5.

See documentation for details.

Below is the cell type annotation report for Cluster 1:

In [48]:
celltype_dict['1']
Out[48]:
[name: T cell; score: 1.00; average marker percentage: 65.55%; strong support: (CD3D+,75.23%),(CD3E+,69.50%),(CD3G+,44.17%),(TRAC+,73.27%),
 name: Plasma cell; score: 0.30; average marker percentage: 33.02%; strong support: (CD27+,33.02%),(MS4A1-,0.27%),
 name: Megakaryocyte; score: 0.17; average marker percentage: 35.88%; weak support: (GP5+,0.16%),(CXCR4+,71.61%),
 name: B cell; score: 0.00; average marker percentage: 0.00%,
 name: Natural killer cell; score: 0.00; average marker percentage: 0.00%,
 name: CD14+ Monocyte; score: 0.00; average marker percentage: 0.00%,
 name: CD16+ Monocyte; score: 0.00; average marker percentage: 0.00%,
 name: Conventional dendritic cell; score: 0.00; average marker percentage: 0.00%,
 name: Plasmacytoid dendritic cell; score: 0.00; average marker percentage: 0.00%,
 name: Hematopoietic stem cell; score: 0.00; average marker percentage: 0.00%,
 name: Erythroid cells; score: 0.00; average marker percentage: 0.00%,
 name: Neutrophil; score: 0.00; average marker percentage: 0.00%,
 name: Macrophage; score: 0.00; average marker percentage: 0.00%,
 name: Mast cell; score: 0.00; average marker percentage: 0.00%]

The report has a list of predicted cell types along with their scores and support genes for users to decide.

Next, substitute the inferred cluster names in data:

  1. Construct an annotation dictionary of the following format:
In [49]:
anno_dict = {str(i + 1): name for i, name in enumerate(cluster_names)}
anno_dict
Out[49]:
{'1': 'Naive T cell',
 '2': 'CD14+ Monocyte',
 '3': 'Cytotoxic T cell',
 '4': 'B cell',
 '5': 'T helper cell',
 '6': 'Natural killer cell',
 '7': 'Cytotoxic T cell-2',
 '8': 'Erythroid cells',
 '9': 'CD14+ Monocyte-2',
 '10': 'Hematopoietic stem cell',
 '11': 'Pre B cell',
 '12': 'Erythroid cells-2',
 '13': 'CD1C+ cDC',
 '14': 'CD16+ Monocyte',
 '15': 'Plasmacytoid dendritic cell',
 '16': 'Plasma cell'}

The annotation dictionary maps cluster labels (keys) to putative cell type names (values). In practice, users may want to build this dictionary manually after reading the report in celltype_dict.

  2. Use the annotate function to rename cluster labels:
In [50]:
pg.annotate(data, name='anno', based_on='louvain_labels', anno_dict=anno_dict)
data.obs['anno'].value_counts()
Out[50]:
Naive T cell                   6230
CD14+ Monocyte                 4648
Cytotoxic T cell               4491
B cell                         4339
T helper cell                  3249
Natural killer cell            2792
Cytotoxic T cell-2             2693
Erythroid cells                1347
CD14+ Monocyte-2               1256
Hematopoietic stem cell        1019
Pre B cell                      934
Erythroid cells-2               701
CD1C+ cDC                       540
CD16+ Monocyte                  464
Plasmacytoid dendritic cell     384
Plasma cell                     378
Name: anno, dtype: int64

Here "anno" is used as the key name for the new cluster names.

Now plot data with cluster names:

In [51]:
pg.scatter(data, attrs='anno', basis='fitsne', dpi=100)
In [52]:
pg.scatter(data, attrs='anno', basis='umap', legend_loc='on data', dpi=150)

Raw Count vs Log-norm Count

Now let's check the count matrix:

In [53]:
data
Out[53]:
MultimodalData object with 1 UnimodalData: 'GRCh38-rna'
    It currently binds to UnimodalData object GRCh38-rna

UnimodalData object with n_obs x n_vars = 35465 x 25653
    Genome: GRCh38; Modality: rna
    It contains 2 matrices: 'X', 'raw.X'
    It currently binds to matrix 'X' as X

    obs: 'n_genes', 'Channel', 'n_counts', 'percent_mito', 'scale', 'Group', 'louvain_labels', 'anno'
    var: 'featureid', 'n_cells', 'percent_cells', 'robust', 'highly_variable_features', 'mean', 'var', 'hvf_loess', 'hvf_rank'
    obsm: 'X_pca', 'X_pca_harmony', 'X_fitsne', 'X_umap'
    varm: 'means', 'partial_sum', 'gmeans', 'gstds', 'de_res'
    uns: 'genome', 'modality', 'df_qcplot', 'Channels', 'Groups', 'ncells', 'gncells', 'c2gid', 'fmat_highly_variable_features', 'PCs', 'pca', 'pca_harmony_knn_indices', 'pca_harmony_knn_distances', 'W_pca_harmony'

You can see that besides X, there is now another matrix, raw.X, generated during this analysis. As the key name indicates, raw.X stores the raw count matrix as loaded from the original Zarr file, while X stores the log-normalized counts.

data currently binds to matrix X. To use the raw counts instead, type:

In [54]:
data.select_matrix('raw.X')
data
Out[54]:
MultimodalData object with 1 UnimodalData: 'GRCh38-rna'
    It currently binds to UnimodalData object GRCh38-rna

UnimodalData object with n_obs x n_vars = 35465 x 25653
    Genome: GRCh38; Modality: rna
    It contains 2 matrices: 'X', 'raw.X'
    It currently binds to matrix 'raw.X' as X

    obs: 'n_genes', 'Channel', 'n_counts', 'percent_mito', 'scale', 'Group', 'louvain_labels', 'anno'
    var: 'featureid', 'n_cells', 'percent_cells', 'robust', 'highly_variable_features', 'mean', 'var', 'hvf_loess', 'hvf_rank'
    obsm: 'X_pca', 'X_pca_harmony', 'X_fitsne', 'X_umap'
    varm: 'means', 'partial_sum', 'gmeans', 'gstds', 'de_res'
    uns: 'genome', 'modality', 'df_qcplot', 'Channels', 'Groups', 'ncells', 'gncells', 'c2gid', 'fmat_highly_variable_features', 'PCs', 'pca', 'pca_harmony_knn_indices', 'pca_harmony_knn_distances', 'W_pca_harmony'

Now data binds to raw counts.

Gene-specific Plots

You may want to check the expression of a specific group of genes. Pegasus provides a number of plots for this purpose.

Let's check the following genes for example:

In [55]:
marker_genes = ['CD38', 'JCHAIN', 'FCGR3A', 'HLA-DPA1', 'CD14', 'CD79A', 'MS4A1', 'CD34', 'TRAC', 'CD3D', 'CD8A',
                'CD8B', 'GYPA', 'NKG7', 'CD4', 'SELL', 'CCR7']

Also, since most of the plots below use log-normalized counts, select matrix X for convenience:

In [56]:
data.select_matrix('X')

Violin Plot

In [57]:
pg.violin(data, attrs=marker_genes, groupby='anno', dpi=100)

Heatmap

By default, Pegasus uses average expressions within clusters for heatmap plotting:

In [58]:
pg.heatmap(data, genes=marker_genes, groupby='anno', row_cluster=True)

You can also show expressions of individual cells:

In [59]:
pg.heatmap(data, genes=marker_genes, groupby='anno', on_average=False)
/Users/yy939/mgh/miniconda3/envs/pegasus-python37/lib/python3.7/site-packages/seaborn/matrix.py:649: UserWarning: Clustering large matrix with scipy. Installing `fastcluster` may give better performance.
  warnings.warn(msg)

Dotplot

In [60]:
pg.dotplot(data, genes=marker_genes, groupby='anno')

Feature Plot

Feature plots use the same scatter method as for plotting cell embeddings, except that this time attrs is fed with gene names.

In [61]:
pg.scatter(data, attrs=['TRAC', 'CD79A', 'CD14', 'CD34'], basis='umap')

Dendrogram

Below is the hierarchical clustering of the clusters based on their expression of marker_genes:

In [62]:
pg.dendrogram(data, genes=marker_genes, groupby='anno', dpi=100)

You can also plot the dendrogram based on the corrected PCA coordinates of cells:

In [63]:
pg.dendrogram(data, groupby='anno', rep=pca_key, dpi=100)

Please see Plotting documentation for details on these plotting methods.

Save Result to File

Use the write_output function to save the analysis result data to a file:

In [64]:
pg.write_output(data, "result.zarr.zip")
2020-08-11 22:13:11,939 - pegasusio.zarr_utils - WARNING - Detected and removed pre-existing file result.zarr.zip.
2020-08-11 22:13:13,182 - pegasusio.readwrite - INFO - zarr.zip file 'result.zarr.zip' is written.
2020-08-11 22:13:13,183 - pegasusio.readwrite - INFO - Function 'write_output' finished in 1.24s.

It's stored in zarr format, because this is the default file format in Pegasus.

Alternatively, you can also save it in h5ad, mtx, or loom format. See its documentation for instructions.